We define the optimal detector as the one with the highest detection rate of BNS post-merger signals. A detection is counted when SNR > 5, a threshold consistent with the literature (see p. 10 of [Dietrich et al. 2020] for further details). The SNR is calculated with bilby by injecting NR strain waveforms of BNS mergers (cropped to the post-merger phase) into each candidate detector. The detector response depends on the injection parameters, e.g. the distance to the merger event, since SNR $\propto 1/d$. All detection rates below are for single detectors, i.e. not in a network!
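The $1/d$ scaling means a detection criterion at one distance can be rescaled to any other. A minimal sketch (the reference SNR and distance here are hypothetical, not values from this study):

```python
# Hypothetical reference: SNR of 8 for a post-merger signal at 40 Mpc
snr_ref, d_ref = 8.0, 40.0  # Mpc

def snr_at(d):
    """Scale the reference SNR to distance d (Mpc), using SNR ∝ 1/d."""
    return snr_ref * d_ref / d

# Horizon distance implied by the SNR > 5 detection threshold
d_horizon = snr_ref * d_ref / 5.0
print(snr_at(64.0))   # exactly at threshold
print(d_horizon)      # Mpc
```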
Instructions: Use the notebook ResultsSupplementary.ipynb to set up the CoRe database and ensure Mallika's code is in the parent directory. Also ensure that bilby is version 1.1.3! (Newer versions require NS masses in the injection parameters.)
TODO:
Explore alternative figures of merit for possible detector designs. For example, a simple extension would be to maximise the detection rate in a network with other interferometric detectors.
Many of the functions in the BNS_Optimisation_Module_Randomised module could be rewritten using watpy, the module recommended by the authors of the CoRe database, to future-proof the code.
Correct the waveform scaling to make use of the full waveform set available.
Calculate detection rate in a network!
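For the network TODO, the standard combination in coincident/coherent analyses is the quadrature sum of single-detector matched-filter SNRs. A sketch (the detector SNR values are hypothetical):

```python
import numpy as np

def network_snr(single_snrs):
    """Combine single-detector matched-filter SNRs in quadrature."""
    return np.sqrt(np.sum(np.square(single_snrs)))

# Hypothetical: a marginal post-merger signal seen in three detectors,
# none of which crosses SNR > 5 alone, but the network does
print(network_snr([3.0, 4.0, 1.0]))  # sqrt(26) ≈ 5.1
```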
import finesse
import numpy as np
import plotly.graph_objects as go
import plotly.express as px
import nemo_optimisation_modules as nom
import plotly.io as pio
pio.renderers.default='notebook'
The explored parameters are: common-mode tuning ($\phi_\text{comm}$), differential-mode tuning ($\phi_\text{diff}$), SRC length ($L_\text{SRC}$), and ITM transmissivity ($T_\text{ITM}$). Parameter intervals are stored in variables of the form vary_phiComm; each is logarithmically spaced and divided into 10 values, i.e. $10^4$ detector configurations in total. The function Finesse_sensitivity_into_txt in nemo_optimisation_modules exports the ASD (qnoise) curve for each detector configuration. Each curve uses the same logarithmic frequency scale, $100\,\text{Hz}\leq f\leq10\,\text{kHz}$. We include 7 dB of squeezing. The naming convention for the text files is: vSRM_i,j,k,l_phiComm_phiDiff_srcL_itmT_prmT_lasPow_ASD_with_RP.txt (i, j, k, l are indices into the parameter intervals).
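The naming convention can be generated programmatically. A sketch of how the file names are assumed to be assembled from the grid values (the helper `asd_filename` is hypothetical; index order inferred from the file name at the end of this notebook):

```python
import numpy as np

vary_phiComm = np.geomspace(1e-1, 180, 10)
vary_phiDiff = np.geomspace(1e-1, 180, 10)
vary_srcL = np.geomspace(10, 1e3, 10)
vary_itmT = np.geomspace(1e-2, 0.5, 10)

def asd_filename(i, j, k, l, prmT, lasPow):
    """Build the ASD text-file name for grid indices (i, j, k, l) =
    (itmT, srcL, phiDiff, phiComm) indices."""
    return (f"vSRM_{i},{j},{k},{l}_{vary_phiComm[l]}_{vary_phiDiff[k]}_"
            f"{vary_srcL[j]}_{vary_itmT[i]}_{prmT}_{lasPow}_ASD_with_RP.txt")

print(asd_filename(3, 8, 4, 9, 0.01, 510.0))
```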
Instructions: Before running, create a folder in the parent directory (the text files are stored here) and set the curves_folder variable accordingly.
TODO: Rerun the sensitivity curves at higher resolution on smaller intervals surrounding the optimal design found in Section 4.3.
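For that TODO, one option is to reuse np.geomspace between the grid points neighbouring the optimum, keeping the logarithmic spacing but at higher resolution. A sketch for one parameter (assuming optimal itmT index 3, as found in Section 4.3):

```python
import numpy as np

vary_itmT = np.geomspace(1e-2, 0.5, 10)
opt = 3  # optimal itmT index from the coarse grid search

# Refine between the grid points on either side of the optimum,
# keeping logarithmic spacing at roughly 10x the resolution
fine_itmT = np.geomspace(vary_itmT[opt - 1], vary_itmT[opt + 1], 19)
print(fine_itmT[0], fine_itmT[-1])
```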
import nemo_optimisation_modules as nom
import numpy as np
import time as t
vary_phiComm = np.geomspace(1e-1,180,10)
vary_phiDiff = np.geomspace(1e-1,180,10)
vary_srcL = np.geomspace(10,1e3,10)
vary_itmT = np.geomspace(1e-2,0.5,10)
changed_itmT = False
store_prmT = 0
store_lasPow = 0
curves_folder = "./FinalSensitivityCurvesSqueezed/"  # folder where the ASD text files are stored
start_time = t.time()
for i, itmT in enumerate(vary_itmT):
    changed_itmT = True
    for j, srcL in enumerate(vary_srcL):
        for k, phiDiff in enumerate(vary_phiDiff):
            for l, phiComm in enumerate(vary_phiComm):
                if changed_itmT:
                    # Re-optimise prmT and laser power whenever itmT changes
                    print('itmT changed')
                    prmT, lasPow = nom.Finesse_sensitivity_into_txt(
                        params_idx=[i,j,k,l], save_path="./FinalSensitivityCurvesSqueezed/",
                        phiComm=phiComm, phiDiff=phiDiff, srcL=srcL, itmT=itmT,
                        prmT=0, lasPow=0, optimise_prmT=True, squeezing=True)
                    store_prmT = prmT
                    store_lasPow = lasPow
                    changed_itmT = False
                else:
                    # Reuse the stored prmT and laser power for this itmT
                    nom.Finesse_sensitivity_into_txt(
                        params_idx=[i,j,k,l], save_path="./FinalSensitivityCurvesSqueezed/",
                        phiComm=phiComm, phiDiff=phiDiff, srcL=srcL, itmT=itmT,
                        prmT=store_prmT, lasPow=store_lasPow,
                        optimise_prmT=False, squeezing=True)
print(t.time() - start_time)
See Mallika's report for more details (and the acknowledgements for using/adapting their code)! Here is a brief explanation (more for my own clarity, but hopefully the reader finds it helpful). For each of the 10,000 detector designs (ASD curves generated in Section 4.1 and saved as text files), the function IFOmakerFromASDArray in BNS_Optimisation_Module_Randomised creates an Interferometer object in bilby. We take the NR waveform set (indexed by the variable index) and duplicate it repeat_waveforms times, i.e. len(index)*repeat_waveforms total BNS merger events. We simulate an observing run of these events, randomly distributed in space out to a detector horizon (detector_horizon defaults to 400 Mpc, cf. 100-200 Mpc for LIGO), by randomising the injection parameters that bilby uses to calculate the detector response. The function calc_SNR in BNS_Optimisation_Module_Randomised counts the detections (SNR > 5) during the observing run, and the detection rate is normalised to units of $\text{yr}^{-1}$. A key parameter here is the assumed BNS merger rate! (merger_rate has units of $\text{Gpc}^{-3}\,\text{yr}^{-1}$.)
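The normalisation described above can be summarised in a few lines. A sketch with hypothetical numbers (the real observing_time comes from random_param; whether it uses exactly this uniform-in-volume normalisation is an assumption):

```python
import numpy as np

merger_rate = 105.5          # BNS mergers per Gpc^3 per yr (mid rate)
detector_horizon_gpc = 0.4   # 400 Mpc expressed in Gpc
n_events = 30                # len(index) * repeat_waveforms, hypothetical

# Expected events per year inside the horizon sphere
volume = 4 / 3 * np.pi * detector_horizon_gpc**3   # Gpc^3
events_per_yr = merger_rate * volume

# Observing time (yr) needed to accumulate n_events, then the rate
observing_time = n_events / events_per_yr
snr_list = [6.2, 1.1, 5.4, 0.8, 9.0]               # hypothetical SNRs
detections = sum(snr > 5 for snr in snr_list)
print(detections / observing_time)                  # detections per yr
```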
Because there are so many candidate detectors, we take only 1 observing run of 3 waveform-set repetitions per kernel! The code in the cell immediately below was duplicated 3 times for each of the merger rates $295.7\,\text{Gpc}^{-3}\,\text{yr}^{-1}$ (high), $105.5\,\text{Gpc}^{-3}\,\text{yr}^{-1}$ (mid), and $21.6\,\text{Gpc}^{-3}\,\text{yr}^{-1}$ (low) from [Abbott et al. 2022], and was run simultaneously in separate python kernels (WARNING: very CPU-intensive!), i.e. 3 observing runs × 3 waveform-set repetitions for each merger rate.
Instructions: Change the curves_path variable to the directory containing the ASD text files. Copy and paste the code into new notebooks to run searches simultaneously. Create a folder in the parent directory to store the numpy array and set the run_path variable accordingly. Change the save_name variable for the filename (e.g. search_mid_1 means the mid merger rate and the 1st search).
TODO:
Speed up the grid search using multiprocessing in python, e.g. https://github.com/johanneseder711/Parallelization for Mac (current runs take nearly 24 h).
Scrutinise the BNS merger rate more closely in the literature.
Explore different search algorithms e.g. particle swarm.
Investigate the distance randomisation of BNS mergers (currently calculated using merger_rates_for_mallika.py) and the artefacts of binning.
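The multiprocessing TODO can be sketched with the standard library alone. Here `eval_config` is a hypothetical placeholder standing in for the per-configuration detection-rate calculation (the real version would wrap the IFOmakerFromASDArray/calc_SNR pipeline):

```python
from multiprocessing import Pool
from itertools import product

def eval_config(idx):
    """Placeholder for the per-configuration detection-rate calculation."""
    i, j, k, l = idx
    return (idx, float(i + j + k + l))  # hypothetical score, not a real rate

if __name__ == "__main__":
    grid = list(product(range(10), repeat=4))  # all 10^4 index tuples
    with Pool(processes=4) as pool:
        # chunksize amortises inter-process overhead across the grid
        results = pool.map(eval_config, grid, chunksize=100)
    print(len(results))
```

Note that the worker function must be defined at module level (so it can be pickled), which is also why the per-configuration code would need to be factored out of the notebook loop.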
import nemo_optimisation_modules as nom
import numpy as np
import BNS_Optimisation_Module_Randomised as bnso #import module
import bilby
import h5py
import gwinc
import astropy.units as u
import astropy.constants as c
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import os
from csv import writer
import pandas as pd
import time as t
df, index = bnso.sim_index(sim_data_name = 'core_database_index_grid_spacing_min_removefpeak1.5to4.csv');
h5_name = "NR_base_final.hdf5"
bnso.make_h5(h5_name);
duration, sampling_frequency, injection_parameters = bnso.initial_parameters();
# Loop over the NR simulations here (rather than inside a function)
for i in index:
    # Collect the data for each simulation and append it to the h5 file
    bnso.collect(i, df, duration, sampling_frequency, injection_parameters, h5_name, None)
detrate_ndarr = np.zeros((10,10,10,10))
fsig = np.geomspace(100,10e3,201)
curves_path = './FinalSensitivityCurvesSqueezed/'
run_path = './ObservingRuns/'
IFO_files = os.listdir(curves_path)
detector_horizon = 400*u.Mpc
merger_rate = 105.5
save_name = 'search_mid_1.npy'
start_time = t.time()
for i in range(10):
    for j in range(10):
        for k in range(10):
            for l in range(10):
                match_ASD = [ifo for ifo in IFO_files if f"{i},{j},{k},{l}" in ifo]
                if len(match_ASD) > 1:
                    print(f"Found a non-unique file match! Check index: {i},{j},{k},{l}")
                ASD_path = match_ASD[0]
                ASDarr = np.array([float(line.rstrip()) for line in open(curves_path + ASD_path, 'r')])
                # Skip configurations that are insensitive across the post-merger band (2-4 kHz)
                PMS_filter = [idx for idx in range(len(fsig)) if 2e3 < fsig[idx] < 4e3]
                ASD_filtered = ASDarr[PMS_filter]
                if all(asd > 1e-20 for asd in ASD_filtered):
                    continue
                IFO, name = bnso.IFOmakerFromASDArray('Config_1', duration, sampling_frequency, 0., fsig, ASDarr)  # define the IFO
                detratelist = []
                observing_runs = 1
                for m in range(observing_runs):  # number of rates calculated
                    repeat_waveforms = 3  # number of times the full waveform set is repeated
                    total_events = list(index) * repeat_waveforms  # each index repeated repeat_waveforms times
                    np.random.shuffle(total_events)  # shuffle the events for each loop
                    # Random injection parameters for the merger rate, plus the observing
                    # time needed to accumulate this many waveforms
                    observing_time, random_param = bnso.random_param(df, detector_horizon, merger_rate, scalewavno=repeat_waveforms)
                    SNRlist = []
                    for n in range(len(total_events)):  # list of SNRs for all the injection parameters
                        injection_parameters = dict(distance=random_param['distance'][n], phase=random_param['phase'][n],
                                                    ra=random_param['ra'][n], dec=random_param['dec'][n],
                                                    psi=random_param['psi'][n], t0=0., geocent_time=0.)
                        SNR = bnso.calc_SNR(total_events[n], duration, sampling_frequency, injection_parameters, IFO, df)
                        SNRlist.append(SNR)
                    detratescaled = sum(snr > 5 for snr in SNRlist)  # number of SNRs above threshold
                    detrate = detratescaled / observing_time  # rescale to detections per year
                    detratelist.append(detrate)
                detrate_ndarr[i, j, k, l] = np.mean(detratelist)
end_time = t.time()
np.save(run_path + save_name, detrate_ndarr)
For each assumed merger rate, search over the 10,000 detectors, average over the 3 observing runs, and print the parameters that yield the highest detection rate. For reference, they are: $\phi_\text{comm}=0^\circ$, $\phi_\text{diff}\simeq2.80^\circ$, $L_\text{SRC}\simeq599.5\,\text{m}$, $T_\text{ITM}\simeq3.68\%$. Given this $T_\text{ITM}$, the impedance-matching condition and the 4.5 MW arm-power condition imply $T_\text{PRM}\simeq1\%$ (WARNING: this is the lower limit of the impedance-matching interval!) and $P_\text{Las}\simeq510\,\text{W}$ (at a resolution of 1 W).
TODO:
Decrease the lower limit and increase the resolution of the prmT interval of the function find_optimal_prmT in nemo_optimisation_modules.
At higher ITM transmissivity, the power incident on the beamsplitter is also higher, and thermo-optic effects may become significant, i.e. the quantum-noise-limited sensitivity assumption may be invalid! (Comments from Johannes.)
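As a rough consistency check on the numbers reported in this section, the lossless on-resonance arm buildup $4/T_\text{ITM}$ should approximately match the ratio of circulating arm power to the per-arm power at the beamsplitter. A sketch using the powers printed by the Finesse model below (this is an approximation, not what find_optimal_prmT actually computes):

```python
# Powers reported by the Finesse model for the optimal detector
P_in = 510.0        # W, input laser power
P_BS = 84.85e3      # W, power incident on the beamsplitter
P_arm = 4.509e6     # W, circulating arm-cavity power
itmT = 0.03684031498640387

prc_gain = P_BS / P_in            # power-recycling gain
arm_gain = P_arm / (P_BS / 2)     # measured arm buildup (half the BS power per arm)
lossless_gain = 4 / itmT          # lossless on-resonance estimate

print(prc_gain, arm_gain, lossless_gain)
```

The measured buildup sits a few percent below the lossless estimate, consistent with the 20 ppm mirror losses in the model.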
import numpy as np
vary_phiComm = np.geomspace(1e-1,180,10)
vary_phiDiff = np.geomspace(1e-1,180,10)
vary_srcL = np.geomspace(10,1e3,10)
vary_itmT = np.geomspace(1e-2,0.5,10)
print('Low merger rate')
run_low_1 = np.load('./ObservingRuns/search_low_1.npy')
run_low_2 = np.load('./ObservingRuns/search_low_2.npy')
run_low_3 = np.load('./ObservingRuns/search_low_3.npy')
run_low = (run_low_1+run_low_2+run_low_3)/3
optimal_idxs_low = np.unravel_index(np.argmax(run_low), run_low.shape)
print(f"Optimal indices: {optimal_idxs_low}")
print(f"itmT: {vary_itmT[optimal_idxs_low[0]]}, srcL: {vary_srcL[optimal_idxs_low[1]]}m, phiDiff: {vary_phiDiff[optimal_idxs_low[2]]}deg, phiComm: {vary_phiComm[optimal_idxs_low[3]]}deg")
print('Mid merger rate')
run_mid_1 = np.load('./ObservingRuns/search_mid_1.npy')
run_mid_2 = np.load('./ObservingRuns/search_mid_2.npy')
run_mid_3 = np.load('./ObservingRuns/search_mid_3.npy')
run_mid = (run_mid_1+run_mid_2+run_mid_3)/3
optimal_idxs_mid = np.unravel_index(np.argmax(run_mid), run_mid.shape)
print(f"Optimal indices: {optimal_idxs_mid}")
print(f"itmT: {vary_itmT[optimal_idxs_mid[0]]}, srcL: {vary_srcL[optimal_idxs_mid[1]]}m, phiDiff: {vary_phiDiff[optimal_idxs_mid[2]]}deg, phiComm: {vary_phiComm[optimal_idxs_mid[3]]}")
print('High merger rate')
run_high_1 = np.load('./ObservingRuns/search_high_1.npy')
run_high_2 = np.load('./ObservingRuns/search_high_2.npy')
run_high_3 = np.load('./ObservingRuns/search_high_3.npy')
run_high = (run_high_1+run_high_2+run_high_3)/3
optimal_idxs_high = np.unravel_index(np.argmax(run_high), run_high.shape)
print(f"Optimal indices: {optimal_idxs_high}")
print(f"itmT: {vary_itmT[optimal_idxs_high[0]]}, srcL: {vary_srcL[optimal_idxs_high[1]]}m, phiDiff: {vary_phiDiff[optimal_idxs_high[2]]}deg,phiComm: {vary_phiComm[optimal_idxs_high[3]]}deg")
Low merger rate
Optimal indices: (3, 8, 4, 9)
itmT: 0.03684031498640387, srcL: 599.4842503189409m, phiDiff: 2.797613918929973deg, phiComm: 180.0deg
Mid merger rate
Optimal indices: (3, 8, 5, 9)
itmT: 0.03684031498640387, srcL: 599.4842503189409m, phiDiff: 6.434054348315725deg, phiComm: 180.0deg
High merger rate
Optimal indices: (3, 8, 4, 9)
itmT: 0.03684031498640387, srcL: 599.4842503189409m, phiDiff: 2.797613918929973deg, phiComm: 180.0deg
For completeness, we include the complete Finesse model for the optimal detector, and plot and print all the outputs. For reference, the output is:
Optimal prmT*: 0.01
Peak Sensitivity: 3.711640838921166e-25 1/rt Hz, Peak Frequency: 2951.209226666384Hz, Peak FWHM: 1656.8774899212108Hz
Input laser power: 509.99999999999994W
PRC power: 84.38429592002166kW
Laser power incident on BS: 84.84931186960083kW
X-arm cavity power: 4.508646983312129MW
Y-arm cavity power: 4.5086469833121265MW
SRC power: 2.941096946444895e-05W
Instructions: As far as I am aware, the most up-to-date noise budget for NEMO is found in OzHF_NoiseBudget_RadCooling_TotalNoise.txt. Ensure this file is in the parent directory.
import finesse
import ipywidgets as widgets
from ipywidgets import HBox, VBox
import matplotlib.pyplot as plt
from IPython.display import display
import numpy as np
import plotly.graph_objects as go
import plotly.express as px
import pandas as pd
from scipy.signal import find_peaks, peak_widths
from scipy import interpolate
fsig = np.geomspace(100,10e3,201)
peak_sens = 0
peak_f = 0
peak_bw = 0
left_f = 0
right_f = 0
kat = finesse.Model()
# From SRC_tunability_DCreadout_changedBW_fin3exp.ipynb
kat.parse(
f"""
# NEMO Base Model (simplified from OzHF_ITMloss_4km.kat and converted to Finesse 3)
###########################################################################
### Variables
###########################################################################
var Larm 4000
var Mtm 74.1 # from NEMO paper (94.4 in OzHF_ITMloss_4km.kat)
var itmT 0.03684031498640387
var lmichx 4.5
var lmichy 4.45
###########################################################################
### Input optics
###########################################################################
l L0 510
s l_in L0.p1 prm.p1
# Power recycling mirror
m prm T=0.01 L=2e-05 phi=90
s prc prm.p2 bs.p1 L=53
# Central beamsplitter
bs bs R=0.4999625 T=0.4999625 alpha=45
# CHECK Input laser power
pd P_in L0.p1.o
# CHECK Laser power incident on BS
pd P_BS bs.p1.i
# CHECK PRC Power
pd P_PRC bs.p1.o
###########################################################################
### X arm
###########################################################################
s lx bs.p3 itmxar.p1 L=lmichx
m itmxar T=1-265.0e-06 L=265.0e-06 phi=180 # phi from SRC_tunability_DCreadout_changedBW_fin3exp.ipynb (0.0 in OzHF_ITMloss_4km.kat)
s ar_thick itmxar.p2 itmx.p1 L=0
m itmx T=itmT L=20u phi=180
s LX itmx.p2 etmx.p1 L=Larm
m etmx T=5u L=20u phi=179.99999
pendulum itmx_sus itmx.mech mass=Mtm fz=1 Qz=1M
pendulum etmx_sus etmx.mech mass=Mtm fz=1 Qz=1M
# CHECK X-arm cavity power
pd P_armX etmx.p1.i
###########################################################################
### Y arm
###########################################################################
s ly bs.p2 itmyar.p1 L=lmichy
m itmyar T=1-265.0e-06 L=265.0e-06 phi=90 # phi from SRC_tunability_DCreadout_changedBW_fin3exp.ipynb (0.0 in OzHF_ITMloss_4km.kat)
s ar_thicky itmyar.p2 itmy.p1 L=0
m itmy T=itmT L=20u phi=90
s LY itmy.p2 etmy.p1 L=Larm
m etmy T=5u L=20u phi=90.00001
pendulum itmy_sus itmy.mech mass=Mtm fz=1 Qz=1M
pendulum etmy_sus etmy.mech mass=Mtm fz=1 Qz=1M
# CHECK Y-arm cavity power
pd P_armY etmy.p1.i
###########################################################################
### vSRM
###########################################################################
s src bs.p4 SRC_BS.p1 L=599.4842503189409
bs SRC_BS T=0.5 L=0 alpha=45
s vSRC1 SRC_BS.p2 vSRM1.p1 L=4.5
m vSRM1 T=0 L=0 phi=-90+0+2.797613918929973
s vSRC2 SRC_BS.p3 vSRM2.p1 L=4.5
m vSRM2 T=0 L=0 phi=0-2.797613918929973
# CHECK SRC power
pd P_SRC SRC_BS.p1.i
###########################################################################
### Output & squeezing (from SRC_tunability_DCreadout_changedBW_fin3exp.ipynb)
###########################################################################
dbs OFI
link(SRC_BS.p4, OFI.p1)
readout_dc AS OFI.p3.o
# A squeezed source could be injected into the dark port
sq sqz db=-7 angle=90
link(sqz, OFI.p2)
# ------------------------------------------------------------------------------
# Degrees of Freedom
# ------------------------------------------------------------------------------
dof STRAIN LX.dofs.h +1 LY.dofs.h -1
# signal generator
sgen sig STRAIN
qnoised NSR_with_RP AS.p1.i nsr=True
qshot NSR_without_RP AS.p1.i nsr=True
pd1 signal AS.p1.i f=fsig
fsig(1)
xaxis(fsig, log, 100, 10k, 200)
"""
)
out = kat.run()
neg_sens = -np.abs(out['NSR_with_RP'])  # Negate the sensitivity so find_peaks locates the minimum, i.e. the FWHM is defined on the negated curve!
peak_idxs, _ = find_peaks(neg_sens)
if peak_idxs.size != 0:  # If a peak is found (first peak taken)
    fwhm_idxs = peak_widths(neg_sens, peak_idxs, rel_height=0.5)
    left_idx = fwhm_idxs[2][0]
    right_idx = fwhm_idxs[3][0]
    # peak_widths returns fractional indices; interpolate onto the frequency grid
    interp_fsig = interpolate.interp1d(np.arange(201), fsig)
    left_f = interp_fsig(left_idx)
    right_f = interp_fsig(right_idx)
    peak_sens = np.abs(out['NSR_with_RP'])[peak_idxs[0]]
    peak_f = fsig[peak_idxs[0]]
    peak_bw = right_f - left_f
fig_qnoise = go.Figure()
fig_qnoise.add_trace(go.Scatter(x=fsig, y=np.abs(out['NSR_with_RP']),mode='lines+markers',name='NEMO+ qnoised NSR'))
fig_qnoise.add_trace(go.Scatter(x=fsig, y=np.abs(out['NSR_without_RP']),mode='lines+markers',name='NEMO+ qshot NSR'))
# Compare to NEMO (2020)
compare_path = 'OzHF_NoiseBudget_RadCooling_TotalNoise.txt'
compare_df = pd.read_csv(compare_path, sep=' ',header=None)
compare_arr = np.sqrt(np.array(compare_df[1]))[300:]
compare_f = np.array(compare_df[0])[300:]
fig_qnoise.add_trace(go.Scatter(x=compare_f, y=compare_arr,mode='lines+markers',name='NEMO'))
fig_qnoise.update_xaxes(type="log")
fig_qnoise.update_yaxes(type="log")
fig_qnoise.add_vline(x=peak_f)
fig_qnoise.add_vline(x=right_f,line_dash='dash',line_color='green')
fig_qnoise.add_vline(x=left_f,line_dash='dash',line_color='green')
fig_qnoise.update_layout(title="ASD (qnoised, qshot)",xaxis_title="Frequency [Hz]",yaxis_title="Sensitivity [1/rt Hz]")
fig_qnoise.show()
fig_signal = go.Figure()
fig_signal.add_trace(go.Scatter(x=fsig, y=np.abs(out['signal']),mode='lines+markers'))
fig_signal.update_xaxes(type="log")
fig_signal.update_yaxes(type="log")
fig_signal.add_vline(x=peak_f)
fig_signal.add_vline(x=right_f,line_dash='dash',line_color='green')
fig_signal.add_vline(x=left_f,line_dash='dash',line_color='green')
fig_signal.update_layout(title="Signal Gain (pd1)",xaxis_title="Frequency [Hz]",yaxis_title="Power [W]")
fig_signal.show()
# Print model outputs
print("Optimal Detector:")
print(f"Optimal prmT*: 0.01")
print(f"Peak Sensitivity: {peak_sens} 1/rt Hz, Peak Frequency: {peak_f}Hz, Peak FWHM: {peak_bw}Hz")
print(f"Input laser power: {np.max(np.abs(out['P_in']))}W")
print(f"PRC power: {np.max(np.abs(out['P_PRC']))*1e-3}kW")
print(f"Laser power incident on BS: {np.max(np.abs(out['P_BS']))*1e-3}kW")
print(f"X-arm cavity power: {np.max(np.abs(out['P_armX']))*1e-6}MW")
print(f"Y-arm cavity power: {np.max(np.abs(out['P_armY']))*1e-6}MW")
print(f"SRC power: {np.max(np.abs(out['P_SRC']))}W")
Optimal Detector:
Optimal prmT*: 0.01
Peak Sensitivity: 3.711640837982218e-25 1/rt Hz, Peak Frequency: 2951.209226666384Hz, Peak FWHM: 1656.877489882803Hz
Input laser power: 509.99999999999994W
PRC power: 84.38429592002166kW
Laser power incident on BS: 84.84931186960083kW
X-arm cavity power: 4.508646983312129MW
Y-arm cavity power: 4.5086469833121265MW
SRC power: 2.941096946444895e-05W
Below is an adaptation of Mallika's notebook Running.ipynb: given an ASD curve in a text file, the detection rate is calculated from 50 observing runs of 100 waveform-set repetitions. Copy and paste this code as needed for different merger rates or detectors to make the comparisons in Section 4.4. The code takes a few hours to run, so copying it into separate notebooks lets you calculate detection rates simultaneously.
Instructions: Replace the text file with the desired ASD text file (the frequency scale doesn't matter). If your text file has a leftmost frequency column, I have commented a line below for this case (it also takes the square root to convert a PSD to an ASD). Change the save_name variable for the filename storing the detection rates (from which you can calculate the mean over the 50 observing runs); e.g. detrate_3849_2020mid.npy means the detection rates for the detector with indices 3,8,4,9 under the assumption of the mid merger rate from the 2020 paper [Abbott et al. 2020].
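For files that have a leading frequency column and store a PSD rather than an ASD (like the NEMO noise-budget file, which the pandas read later in this section treats as whitespace-separated), the conversion might look like this sketch (the two-row array stands in for the real file):

```python
import numpy as np

# Hypothetical two-column data: frequency [Hz], PSD [1/Hz]
data = np.array([[100.0, 1e-48],
                 [1000.0, 4e-48]])
# In practice: data = np.loadtxt('OzHF_NoiseBudget_RadCooling_TotalNoise.txt')

freqs = data[:, 0]
ASDarr = np.sqrt(data[:, 1])   # ASD [1/sqrt(Hz)] from the PSD column
print(ASDarr)
```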
import nemo_optimisation_modules as nom
import numpy as np
import BNS_Optimisation_Module_Randomised as bnso #import module
import bilby
import h5py
import gwinc
import astropy.units as u
import astropy.constants as c
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import os
from csv import writer
import pandas as pd
import time as t
df, index = bnso.sim_index(sim_data_name = 'core_database_index_grid_spacing_min_removefpeak1.5to4.csv')
h5_name = "NR_base_final.hdf5"
bnso.make_h5(h5_name)
duration, sampling_frequency, injection_parameters = bnso.initial_parameters()
# Loop over the NR simulations here (rather than inside a function)
for i in index:
    # Collect the data for each simulation and append it to the h5 file
    bnso.collect(i, df, duration, sampling_frequency, injection_parameters, h5_name, None)
detrate_ndarr = np.zeros((10,10,10,10))
fsig = np.geomspace(100,10e3,201)
curves_path = './FinalSensitivityCurvesSqueezed/'
run_path = './ObservingRuns/'
IFO_files = os.listdir(curves_path)
save_name = "detrate_3849_2020high.npy"
ASDarr = np.array([float(line.rstrip()) for line in open(curves_path+'vSRM_3,8,4,9_180.0_2.797613918929973_599.4842503189409_0.03684031498640387_0.01_510.0_ASD_with_RP.txt', 'r')])
IFO, name = bnso.IFOmakerFromASDArray('Config_1',duration,sampling_frequency, 0., fsig, ASDarr); #define the IFO
detratelist = []
observing_runs = 50
for m in range(observing_runs):  # number of rates calculated
    print(m)
    repeat_waveforms = 100  # number of times the full waveform set is repeated
    total_events = list(index) * repeat_waveforms  # each index repeated repeat_waveforms times
    np.random.shuffle(total_events)  # shuffle the events for each loop
    # Random injection parameters for a merger rate of 2810 Gpc^-3 yr^-1
    # (matching the '2020high' save_name), plus the corresponding observing time
    observing_time, random_param = bnso.random_param(df, 400*u.Mpc, 2810, scalewavno=repeat_waveforms)
    SNRlist = []
    for n in range(len(total_events)):  # list of SNRs for all the injection parameters
        injection_parameters = dict(distance=random_param['distance'][n], phase=random_param['phase'][n],
                                    ra=random_param['ra'][n], dec=random_param['dec'][n],
                                    psi=random_param['psi'][n], t0=0., geocent_time=0.)
        SNR = bnso.calc_SNR(total_events[n], duration, sampling_frequency, injection_parameters, IFO, df)
        SNRlist.append(SNR)
    detratescaled = sum(snr > 5 for snr in SNRlist)  # number of SNRs above threshold
    detrate = detratescaled / observing_time  # rescale to detections per year
    detratelist.append(detrate)
# detrate_ndarr[i,j,k,l] = np.mean(detratelist)
end_time = t.time()
np.save(run_path+save_name,detratelist)
Find the saved detection-rate arrays for the optimal detector (ASD filename: vSRM_3,8,4,9_180.0_2.797613918929973_599.4842503189409_0.03684031498640387_0.01_510.0_ASD_with_RP.txt) in ./ObservingRuns and compare to the detection-rate array for NEMO (ASD filename: OzHF_NoiseBudget_RadCooling_TotalNoise.txt). It is worth emphasising again how critical the assumed BNS merger rate is. Regardless, the preliminary results indicate that the optimal detector outperforms the original NEMO in BNS post-merger detection rate.
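When comparing the saved arrays, the spread over the 50 observing runs also gives a statistical uncertainty on each mean. A sketch with a hypothetical random array standing in for the saved .npy files:

```python
import numpy as np

# In practice: detrates = np.load('./ObservingRuns/detrate_3849_mid.npy')
rng = np.random.default_rng(0)
detrates = rng.poisson(30, size=50) / 2.0   # hypothetical rates, yr^-1

mean_rate = np.mean(detrates)
# Standard error of the mean over the 50 observing runs
std_err = np.std(detrates, ddof=1) / np.sqrt(len(detrates))
print(f"{mean_rate:.2f} +/- {std_err:.2f} per yr")
```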
TODO: Investigate why the detection rate for OzHF_NoiseBudget_RadCooling_TotalNoise.txt, which should correspond to "Config_1" in Figure 9 of Mallika's report, is lower than expected.
Refer to: R. Abbott, T. D. Abbott, F. Acernese, K. Ackley, et al. (The LIGO Scientific Collaboration, the Virgo Collaboration, the KAGRA Collaboration). The population of merging compact binaries inferred using gravitational waves through GWTC-3. arXiv:2111.03634 [astro-ph, gr-qc], February 2022.
import plotly.graph_objects as go
nemo_low = np.mean(np.load('./ObservingRuns/detrate_NEMO_low.npy'))
nemo_mid = np.mean(np.load('./ObservingRuns/detrate_NEMO_mid.npy'))
nemo_high = np.mean(np.load('./ObservingRuns/detrate_NEMO_high.npy'))
det3849_low = np.mean(np.load('./ObservingRuns/detrate_3849_low.npy'))
det3849_mid = np.mean(np.load('./ObservingRuns/detrate_3849_mid.npy'))
det3849_high = np.mean(np.load('./ObservingRuns/detrate_3849_high.npy'))
fig = go.Figure()
fig.add_trace(go.Scatter(
x=['NEMO+'],
y=[det3849_mid],
mode='markers',
marker=dict(size=40),
error_y=dict(
type='data',
symmetric=False,thickness=20,
array=[det3849_high - det3849_mid],
arrayminus=[det3849_mid - det3849_low],width=20)
))
fig.add_trace(go.Scatter(
x=['NEMO (2020)'],
y=[nemo_mid],
mode='markers',
marker=dict(size=40),
error_y=dict(
type='data',
symmetric=False,thickness=20,
array=[nemo_high-nemo_mid],
arrayminus=[nemo_mid-nemo_low],width=20)))
fig.update_layout(
autosize=False,
width=1000,
height=1000,
margin=dict(
l=150,
r=150,
b=150,
t=150,
pad=50
))
fig.update_yaxes(type="log")
fig.update_layout(showlegend=False,font=dict(size=24))
fig.update_layout(title='Detection Rate Comparison',xaxis_title="Configuration",yaxis_title="Detection Rate [1/yr]")
fig.show()
Refer to: B. P. Abbott et al. GW190425: Observation of a compact binary coalescence with total mass $\sim3.4\,M_\odot$. The Astrophysical Journal Letters, 892(1):L3, March 2020. This was the merger rate assumed in [Ackley et al. 2020]. The point between the error bars here is just the midpoint!
import plotly.graph_objects as go
nemo_2020low = np.mean(np.load('./ObservingRuns/detrate_NEMO_2020low.npy'))
nemo_2020high = np.mean(np.load('./ObservingRuns/detrate_NEMO_2020high.npy'))
nemo_2020mid = np.mean([nemo_2020low,nemo_2020high])
det3849_2020low = np.mean(np.load('./ObservingRuns/detrate_3849_2020low.npy'))
det3849_2020high = np.mean(np.load('./ObservingRuns/detrate_3849_2020high.npy'))
det3849_2020mid = np.mean([det3849_2020low,det3849_2020high])
fig = go.Figure()
fig.add_trace(go.Scatter(
x=['NEMO+'],
y=[det3849_2020mid],
mode='markers',
marker=dict(size=40),
error_y=dict(
type='data',
symmetric=False,thickness=20,
array=[det3849_2020high - det3849_2020mid],
arrayminus=[det3849_2020mid - det3849_2020low],width=20)
))
fig.add_trace(go.Scatter(
x=['NEMO (2020)'],
y=[nemo_2020mid],
mode='markers',
marker=dict(size=40),
error_y=dict(
type='data',
symmetric=False,thickness=20,
array=[nemo_2020high-nemo_2020mid],
arrayminus=[nemo_2020mid-nemo_2020low],width=20)))
fig.update_layout(
autosize=False,
width=1000,
height=1000,
margin=dict(
l=150,
r=150,
b=150,
t=150,
pad=50
))
fig.update_yaxes(type="log")
fig.update_layout(showlegend=False,font=dict(size=24))
fig.update_layout(title='Detection Rate Comparison',xaxis_title="Configuration",yaxis_title="Detection Rate [1/yr]")
fig.show()